Time-Series Regression and Generalized Least Squares in R: An Appendix to An R Companion to Applied Regression, Second Edition

Author

  • John Fox
Abstract

Generalized least-squares (GLS) regression extends ordinary least-squares (OLS) estimation of the normal linear model by providing for possibly unequal error variances and for correlations between different errors. A common application of GLS estimation is to time-series regression, in which it is generally implausible to assume that errors are independent. This appendix to Fox and Weisberg (2011) briefly reviews GLS estimation and demonstrates its application to time-series data using the gls function in the nlme package, which is part of the standard R distribution.

1 Generalized Least Squares

In the standard linear model (for example, in Chapter 4 of the text),

y = Xβ + ε

where y is the n × 1 response vector; X is an n × (k + 1) model matrix; β is a (k + 1) × 1 vector of regression coefficients to estimate; and ε is an n × 1 vector of errors. Assuming that ε ∼ Nn(0, σ²In) leads to the familiar ordinary-least-squares (OLS) estimator of β,

b_OLS = (X′X)⁻¹X′y

with covariance matrix

Var(b_OLS) = σ²(X′X)⁻¹

Let us, however, assume more generally that ε ∼ Nn(0, Σ), where the error covariance matrix Σ is symmetric and positive-definite. Different diagonal entries in Σ correspond to non-constant error variances, while nonzero off-diagonal entries correspond to correlated errors. Suppose, for the time being, that Σ is known. Then the log-likelihood for the model is

log_e L(β) = −(n/2) log_e 2π − (1/2) log_e(det Σ) − (1/2)(y − Xβ)′Σ⁻¹(y − Xβ)

which is maximized by the generalized-least-squares (GLS) estimator of β,

b_GLS = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y

with covariance matrix

Var(b_GLS) = (X′Σ⁻¹X)⁻¹
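The gls function referred to in the abstract fits such models by maximum likelihood (or REML), estimating the parameters of Σ along with the regression coefficients. The following is a minimal sketch, not taken from the appendix: the simulated trend-plus-AR(1) data, the value ρ = 0.7, and all variable names are assumptions made for illustration only. It computes the closed-form GLS estimator b_GLS = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y for a known AR(1) correlation matrix, and then fits the same model with gls() and a corAR1() correlation structure.

## Minimal sketch: GLS with a known Sigma versus nlme::gls() with an
## estimated AR(1) error process (simulated data, for illustration only)
library(nlme)  # supplies gls() and corAR1(); part of the standard R distribution

set.seed(123)
n <- 100
x <- 1:n                                            # a simple time trend as the predictor
e <- as.numeric(arima.sim(list(ar = 0.7), n = n))   # AR(1) errors with rho = 0.7
y <- 2 + 0.5 * x + e

## (1) GLS "by hand" for a known Sigma: b_GLS = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y
X <- cbind(1, x)
rho <- 0.7
Sigma <- rho^abs(outer(1:n, 1:n, "-"))   # AR(1) correlation matrix; the overall error
                                         # variance cancels in the point estimator
b.gls <- solve(t(X) %*% solve(Sigma) %*% X, t(X) %*% solve(Sigma) %*% y)
b.gls

## (2) gls() estimates the AR(1) parameter by maximum likelihood
dat <- data.frame(y = y, x = x)
mod.gls <- gls(y ~ x, data = dat, correlation = corAR1(), method = "ML")
summary(mod.gls)

## For comparison: OLS ignores the autocorrelation in the errors
mod.ols <- lm(y ~ x, data = dat)
summary(mod.ols)

With positively autocorrelated errors, the OLS point estimates remain unbiased but their reported standard errors are typically too small; the gls() fit adjusts the coefficient estimates and their standard errors for the fitted AR(1) process.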


Similar resources

Time-Series Regression and Generalized Least Squares: Appendix to An R and S-PLUS Companion to Applied Regression

with covariance matrix V(b_OLS) = σ²(X′X)⁻¹. Let us, however, assume more generally that ε ∼ Nn(0, Σ), where the error-covariance matrix Σ is symmetric and positive-definite. Different diagonal entries in Σ correspond to non-constant error variances, while nonzero off-diagonal entries correspond to correlated errors. Suppose, for the time being, that Σ is known. Then, the log-likelihood for the mode...


Nonparametric Regression in R An Appendix to An R Companion to Applied Regression, Second Edition

In traditional parametric regression models, the functional form of the model is specified before the model is fit to data, and the object is to estimate the parameters of the model. In nonparametric regression, in contrast, the object is to estimate the regression function directly without specifying its form explicitly. In this appendix to Fox and Weisberg (2011), we describe how to fit sever...


Comparison of Maximum Likelihood Estimation and Bayesian with Generalized Gibbs Sampling for Ordinal Regression Analysis of Ovarian Hyperstimulation Syndrome

Background and Objectives: Analysis of ordinal data outcomes can lead to biased estimates and large variances when the data are sparse. The objective of this study is to compare parameter estimates of an ordinal regression model under maximum-likelihood and Bayesian frameworks with generalized Gibbs sampling. The models were used to analyze ovarian hyperstimulation syndrome data. Methods: This study use...


Least-squares support vector machine and its application in the simultaneous quantitative spectrophotometric determination of pharmaceutical ternary mixture

This paper proposes the least-squares support vector machine (LS-SVM) as an intelligent method applied to absorption spectra for the simultaneous determination of paracetamol (PCT), caffeine (CAF), and ibuprofen (IB) in Novafen. The signal-to-noise ratio (S/N) increased. Also, in the LS-SVM model, the kernel parameter (σ²) and capacity factor (C) were optimized. Excellent prediction was shown usin...


Robust high-dimensional semiparametric regression using optimized differencing method applied to the vitamin B2 production data

Background and purpose: With the evolution of science, knowledge, and technology, we deal with high-dimensional data in which the number of predictors may considerably exceed the sample size. The main problems with high-dimensional data are the estimation of the coefficients and their interpretation. For high-dimensional problems, classical methods are not reliable because of a large number of predictor variable...





Publication date: 2010